So now we'll come to what kind of agents we're going to look at.
Remember, an agent is basically just a mathematical function, but we're going to look at agent
architectures, which means we're going to draw nice pictures with squares in them, and they
have squares in the squares, and those have labels, and then we can reason about what kind
of components you need for which kind of tasks.
And of course we need to fix an agent architecture on which we can actually run an agent
program, which basically means: what information or other resources do we have access
to?
And we're going to look at four large classes, plus kind of a fifth class, the learning
agent, where we think of the learning process as a meta-process.
Any of those agents can actually be turned into a learning agent by adding a learning
component.
And we're going to use these kind of pictures.
Since we're concentrating on the agent, that square is big; the environment, which we're
not concentrating on (that was five minutes ago), we're making small.
And so the simplest agent we can imagine is what we call a simple reflex agent.
And what that does is: it's actually a function from percepts, not percept sequences, to
actions.
The thing you associate with reflexes: you put your hand onto a hot stove,
you care only about that one percept, and you pull back your hand.
You're not asking, oh, how was the weather yesterday, and did my mother tell me
anything about hot stoves, and so on.
That's already a little bit more than a reflex agent.
Typically, say, these one-cell organisms are reflex agents.
If things get hot there, they move that way.
Or if the water becomes too acidic in one place, they try to move away, stuff like that.
Percept: acidic; action: move, and so on.
And we can model that as having sensors that really tell us how the world is now.
Ow, hot.
We have a simple model of the world, and the world state, for one-cell organisms,
is mostly "ow" or "not ow".
And then we have a couple of condition-action rules that say: if ow then go, if not ow
then stay.
That's what a very simple reflex agent does.
It conserves energy, because if it's not ow, then that's all.
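The condition-action rules just described could be sketched like this. This is a minimal, hypothetical sketch: the function names, percept shape, and thresholds are all made up for illustration; the point is only that the agent maps the current percept, and nothing else, to an action.

```python
# Simple reflex agent sketch: current percept -> state -> rule -> action.
# No percept history is kept anywhere.

def interpret_input(percept):
    """Abstract the raw percept into a world state: 'ow' or 'not ow'.

    The percept format (temperature, acidity) and the thresholds
    are illustrative assumptions, not from the lecture."""
    temperature, acidity = percept
    if temperature > 40 or acidity < 5.0:
        return "ow"
    return "not ow"

# Condition-action rules: if ow then go, if not ow then stay.
RULES = {"ow": "go", "not ow": "stay"}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return RULES[state]

print(simple_reflex_agent((80, 7.0)))  # hot water -> "go"
print(simple_reflex_agent((20, 7.0)))  # comfortable -> "stay"
```

Note that the agent function takes a single percept, not a percept sequence; that is exactly what makes it a simple reflex agent.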
And this is essentially very simple; you could think of that as a table algorithm.
Only it might be a little bit more efficient, depending on what we actually use as a
representation of the world.
It could be complicated, we'll come into that in a second.
And by the way, the vacuum cleaner agent is a simple reflex agent.
Basically it has: if dirty then suck, if clean then move.
Right?
That's all you have to do and that's actually what powers the Roomba.
Then they have: if battery almost empty, go home.
Why would you need more?
Okay, but your sensors might pick that up.
A reflex agent with good sensors, is that all we need?
Any ideas?
There's this thing.
Do you want to base your decisions on previous knowledge?
Presenters:
Accessible via: Open access
Duration: 00:29:48 min
Recording date: 2020-10-27
Uploaded on: 2020-10-27 10:47:00
Language: en-US
Introduction of the different agent types and why they are necessary.